Technology Industry Divided over How to Govern AI Development

2023-12-08

Technology leaders have shown major support for laws to govern artificial intelligence use. At the same time, they are seeking to guarantee that any future AI rules work in their favor.

The technology industry is increasingly divided about how to govern AI. One side supports an "open science" method of AI development; the other supports a closed method.

Facebook parent Meta and IBM recently launched a new group called the AI Alliance. The group supports the "open science" method of AI development. On the other side are companies such as Google, Microsoft and ChatGPT-maker OpenAI.

Safety is at the heart of the debate. But tech leaders are also arguing about who should profit from AI developments.

What is open-source AI?
The term "open-source" comes from a common method of building software in which the code is widely available at no cost. Anyone can examine and make changes to it.

Open-source AI involves more than just code. Computer scientists differ on how to define "open source." They say the definitions depend on which parts of the technology are publicly available and whether there are restrictions on its use.

Some computer scientists use the term "open science" to describe the wider philosophy.

IBM and Meta lead the AI Alliance. Members include Dell, Sony, chipmakers AMD and Intel, and several universities and smaller AI companies. The alliance is coming together to say "that the future of AI is going to be built ... on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies," said Darío Gil of IBM. Gil made the comment in a discussion with The Associated Press.
Concerns about open-source AI

Part of the confusion about open-source AI is that the company that built ChatGPT and the image-generator DALL-E is called OpenAI. But its AI systems are closed.

"There are near-term and commercial incentives against open source," said Ilya Sutskever, OpenAI's chief scientist and co-founder, in a video with Stanford University in April.

But there is also a longer-term worry about the open development method. Sutskever noted that one worry is that an AI system with powerful abilities could be too dangerous to make available to the public.

For example, he described a possible AI system that could learn how to start its own biological laboratory.

Even current AI models present risks. They could create disinformation campaigns, for example, said David Evan Harris of the University of California, Berkeley. Such campaigns could disrupt democratic elections, he said.

"Open source is really great in so many dimensions of technology," but AI is different, Harris said.

The Center for Humane Technology, a longtime critic of Meta's social media activities, is among the groups drawing attention to the risks of open-source or leaked AI models.

"As long as there are no guardrails in place right now, it's just completely irresponsible to be deploying these models to the public," said the group's Camille Carlton.
Benefits and dangers

An increasingly public debate has appeared over the good and bad of using an open-source method of AI development.

Meta's chief AI scientist, Yann LeCun, this fall criticized OpenAI, Google, and Anthropic on social media for what he described as "massive corporate lobbying." LeCun argues that the companies are trying to write rules in a way that helps their high-performing AI models and could help them hold their power over the technology's development. The three companies, along with OpenAI's key partner Microsoft, have formed their own industry group called the Frontier Model Forum.

LeCun said on X, formerly Twitter, "Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture."

For IBM, the dispute feeds into a much longer competition that began before the AI boom. IBM was an early supporter of the open-source Linux operating system in the 1990s.

Chris Padilla leads IBM's international government affairs team. He suggested the companies are trying to raise fears about open-source innovation, as they have done in the past.

He added, "I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They're taking a similar approach here."

I'm John Russell.
Matt O'Brien reported on this story for the Associated Press. John Russell adapted it for VOA Learning English.

__________________________________________________

Words in This Story

innovation - n. the act of introducing new ideas, devices, or methods

incentive - n. something that encourages a person to do something

disrupt - v. to interrupt the normal progress or activity of something

dimension - n. a part of something

guardrail - n. a protective device along the side of a road that prevents vehicles from driving off the road (can be used metaphorically)

lobby - v. to try to influence government officials to make decisions for or against something